SeqNet: Learning Descriptors for Sequence-Based Hierarchical Place Recognition

Authors

Abstract

Visual Place Recognition (VPR) is the task of matching current visual imagery from a camera to images stored in a reference map of the environment. While initial VPR systems used simple direct image methods or hand-crafted visual features, recent work has focused on learning more powerful visual features and further improving performance through either some form of sequential matcher / filter or a hierarchical matching process. In both cases the performance of the initial single-image based system is still far from perfect, putting significant pressure on the sequence matching or (in the case of hierarchical systems) pose refinement stages. In this paper we present a novel hybrid system that creates a high performance initial match hypothesis generator using short learnt sequential descriptors, which enable selective control of sequential score aggregation using single image learnt descriptors. Sequential descriptors are generated by a temporal convolutional network dubbed SeqNet, encoding short image sequences with 1-D convolutions, which are then matched against the corresponding temporal descriptors from the reference dataset to provide an ordered list of place match hypotheses. We then perform selective sequential score aggregation using shortlisted single image learnt descriptors from a separate pipeline to produce an overall place match hypothesis. Comprehensive experiments on challenging benchmark datasets demonstrate the proposed method outperforming recent state-of-the-art methods given the same amount of sequential information. Source code and supplementary material can be found online at https://github.com/oravus/seqNet.
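To make the two-stage idea in the abstract concrete, the following is a minimal PyTorch-style sketch, not the authors' implementation: the class and parameter names (SeqDescriptor, hierarchical_match, the layer sizes, and the shortlist size) are illustrative assumptions; the reference code is in the linked repository.

```python
# Sketch of a SeqNet-style sequential descriptor: a 1-D temporal convolution
# over a short sequence of single-image descriptors, pooled into one
# L2-normalised "sequence" descriptor. Sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqDescriptor(nn.Module):
    def __init__(self, in_dim=4096, out_dim=4096, kernel=3):
        super().__init__()
        # Convolve along the temporal axis of the descriptor sequence.
        self.conv = nn.Conv1d(in_dim, out_dim, kernel_size=kernel)

    def forward(self, x):
        # x: (batch, seq_len, in_dim) -> (batch, in_dim, seq_len) for Conv1d
        x = x.transpose(1, 2)
        x = self.conv(x)                   # (batch, out_dim, seq_len - kernel + 1)
        x = x.mean(dim=2)                  # temporal average pooling
        return F.normalize(x, p=2, dim=1)  # unit-length sequential descriptor

def hierarchical_match(query_seq_desc, ref_seq_descs,
                       query_img_desc, ref_img_descs, k=10):
    """Shortlist places with the sequential descriptor, then re-score the
    shortlist with single-image descriptors (all descriptors unit-length)."""
    seq_scores = ref_seq_descs @ query_seq_desc     # cosine similarity to all refs
    shortlist = torch.topk(seq_scores, k).indices   # ordered place hypotheses
    img_scores = ref_img_descs[shortlist] @ query_img_desc
    return shortlist[torch.argmax(img_scores)]      # final place hypothesis
```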


Similar Resources

Real-Time Lidar-Based Place Recognition Using Distinctive Shape Descriptors

A key component in the emerging localization and mapping paradigm is an appearance-based place recognition algorithm that detects when a place has been revisited. This algorithm can run in the background at a low frame rate and be used to signal a global geometric mapping algorithm when a loop is detected. An optimization technique can then be used to correct the map by ‘closing the loop’. This...


Visual Place Recognition Using Landmark Distribution Descriptors

Recent work by Sünderhauf et al. [1] demonstrated improved visual place recognition using proposal regions coupled with features from convolutional neural networks (CNN) to match landmarks between views. In this work we extend the approach by introducing descriptors built from landmark features which also encode the spatial distribution of the landmarks within a view. Matching descriptors then ...
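As a rough illustration of the descriptor construction sketched in this entry, the NumPy snippet below combines per-landmark CNN features with a coarse spatial histogram of landmark centres. The function name, grid size, and weighting are assumptions made here for illustration and differ from the paper's actual encoding.

```python
# Hedged sketch: fuse landmark appearance (CNN features) with the spatial
# distribution of landmarks in the view. Illustrative only.
import numpy as np

def landmark_distribution_descriptor(features, boxes, image_wh, grid=4):
    """features: (N, D) CNN features, one per landmark proposal.
    boxes: (N, 4) landmark boxes as (x1, y1, x2, y2) in pixels.
    image_wh: (width, height) of the view."""
    w, h = image_wh
    # Appearance part: mean of L2-normalised landmark features.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    appearance = feats.mean(axis=0)
    # Spatial part: histogram of landmark centres over a coarse grid.
    cx = (boxes[:, 0] + boxes[:, 2]) / 2.0 / w
    cy = (boxes[:, 1] + boxes[:, 3]) / 2.0 / h
    ix = np.clip((cx * grid).astype(int), 0, grid - 1)
    iy = np.clip((cy * grid).astype(int), 0, grid - 1)
    spatial = np.zeros(grid * grid)
    np.add.at(spatial, iy * grid + ix, 1.0)
    spatial /= max(spatial.sum(), 1.0)
    descriptor = np.concatenate([appearance, spatial])
    return descriptor / np.linalg.norm(descriptor)
```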


Place Sequence Learning for Navigation

A model of the hippocampus as a cognitive map, inspired by the models of Burgess et al. and Jensen et al., is proposed. Simulations show that the resulting navigation behavior is as efficient as the behavior exhibited by previous models. However, the architecture of the proposed model and the mechanisms governing the temporal characteristics of the neurons in the model are more realistic. In particula...


Place Recognition Using Regional Point Descriptors for 3D Mapping

In order to operate in unstructured outdoor environments, globally consistent 3D maps are often required. In the absence of an absolute position sensor such as GPS or modifications to the environment, the ability to recognize previously observed locations is necessary to identify loop closures. Regional point or keypoint descriptors are a way to encode the structure within a small local region a...
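For intuition, here is a hedged NumPy sketch of a regional point descriptor in this spirit: a normalised 3-D histogram of neighbour offsets around a keypoint. The radius, bin count, and function name are illustrative assumptions, not the paper's actual descriptors.

```python
# Hedged sketch: summarise local 3D structure around a keypoint with a
# histogram of neighbour offsets. Illustrative only.
import numpy as np

def regional_descriptor(cloud, keypoint, radius=1.0, bins=5):
    """cloud: (N, 3) point cloud; keypoint: (3,) query point."""
    offsets = cloud - keypoint
    dists = np.linalg.norm(offsets, axis=1)
    local = offsets[dists < radius]          # points inside the local region
    if len(local) == 0:
        return np.zeros(bins ** 3)
    # 3-D histogram of relative positions, flattened and normalised.
    hist, _ = np.histogramdd(local, bins=bins,
                             range=[(-radius, radius)] * 3)
    hist = hist.ravel()
    return hist / hist.sum()
```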


Local and Global Descriptors for Place Recognition in Robotics

The simultaneous autolocalization and mapping of the environment is one of the most pressing problems of robotics. Among the existing SLAM algorithms, place recognition is a must for several cases. As an example, in multirobot SLAM we have several individual maps created by various robots. In order to combine them into one global map we have to identify common places before merging them. In thi...



Journal

Journal title: IEEE Robotics and Automation Letters

Year: 2021

ISSN: 2377-3766

DOI: https://doi.org/10.1109/lra.2021.3067633